    Gradient Estimation using Lagrange Interpolation Polynomials

    In this paper we use Lagrange interpolation polynomials to obtain good gradient estimates. This is important, for example, for nonlinear programming solvers. As an error criterion we take the mean squared error, which can be split into a deterministic and a stochastic error. We analyze these errors using (N times replicated) Lagrange interpolation polynomials. We show that the mean squared error is of order N^{-1 + 1/(2d)} if we replicate the Lagrange estimation procedure N times and use 2d evaluations in each replicate. As a result, the order of the mean squared error converges to N^{-1} as the number of evaluation points increases to infinity. Moreover, we show that our approach is also useful for deterministic functions in which numerical errors are involved. Finally, we consider the case of a fixed budget of evaluations; for this situation we provide an optimal division between the number of replicates and the number of evaluations per replicate.
    Keywords: estimation; interpolation; polynomials; nonlinear programming
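
    A minimal sketch of the replicated estimation idea, not the paper's own code: per coordinate, a Lagrange interpolation polynomial is fitted through two evaluation points around x (2d evaluations per replicate), its derivative at x is taken as the gradient estimate, and N independent replicates on the noisy function are averaged. The test function, step size h and number of replicates below are illustrative assumptions.

        # Sketch: replicated Lagrange-interpolation gradient estimate for a noisy function.
        import numpy as np

        def lagrange_gradient(f, x, h=0.1, n_replicates=50):
            d = len(x)
            grads = np.zeros((n_replicates, d))
            for r in range(n_replicates):
                for i in range(d):
                    nodes = np.array([-h, h])          # two interpolation nodes per coordinate
                    e = np.zeros(d); e[i] = 1.0
                    values = np.array([f(x + t * e) for t in nodes])   # noisy evaluations
                    # the degree-1 fit through the nodes is the Lagrange interpolation
                    # polynomial; its derivative at 0 estimates the i-th partial derivative
                    coeffs = np.polyfit(nodes, values, deg=1)
                    grads[r, i] = np.polyval(np.polyder(coeffs), 0.0)
            return grads.mean(axis=0)                  # averaging replicates reduces the stochastic error

        # toy usage: f(x) = sum(x^2) plus noise, true gradient at (1, -2) is (2, -4)
        f = lambda x: np.sum(x**2) + np.random.normal(scale=1e-3)
        print(lagrange_gradient(f, np.array([1.0, -2.0])))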

    Constrained Optimization Involving Expensive Function Evaluations: A Sequential Approach

    This paper presents a new sequential method for constrained nonlinear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical optimization methods, based on derivatives, are not applicable because derivative information is often unavailable and too expensive to approximate through finite differencing. The algorithm first creates an experimental design, and the underlying functions are evaluated in the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e. the points are located in such a way that they result in a bad approximation of the actual model, then we evaluate a geometry-improving instead of an objective-improving point. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on getting good solutions with a limited number of function evaluations (not necessarily on reaching high accuracy).
    Keywords: optimization; nonlinear programming
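
    The sketch below illustrates one iteration of such a scheme under stated assumptions: the distance-based weights, the box-shaped trust region and the helper names are illustrative choices rather than the authors' exact algorithm, and the constraint/filter handling and geometry checks are omitted.

        # Sketch: fit a weighted local linear model and minimize it over a box trust region.
        import numpy as np

        def local_linear_model(X, y, center, radius):
            """Weighted least-squares fit y ~ b0 + b.(x - center); closer points get larger weights."""
            D = X - center
            w = np.exp(-np.linalg.norm(D, axis=1) / radius)
            A = np.hstack([np.ones((len(X), 1)), D])   # design matrix [1, x - center]
            sw = np.sqrt(w)
            coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
            return coef[0], coef[1:]                   # intercept and slope

        def trust_region_step(X, y, center, radius):
            """Minimizer of the linear model over the box ||x - center||_inf <= radius (a box corner)."""
            _, slope = local_linear_model(X, y, center, radius)
            return center - radius * np.sign(slope)

        # toy usage on f(x) = (x0 - 1)^2 + (x1 + 2)^2 with four evaluated design points
        f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2
        X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, -0.5], [-0.5, 0.5]])
        y = np.array([f(x) for x in X])
        print(trust_region_step(X, y, center=np.zeros(2), radius=0.5))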

    Gradient Estimation Schemes for Noisy Functions

    In this paper we analyze different schemes for obtaining gradient estimates when the underlying function is noisy. Good gradient estimation is important, for example, for nonlinear programming solvers. As an error criterion we take the norm of the difference between the real and estimated gradients. This error can be split into a deterministic and a stochastic error. For three finite-difference schemes and two Design of Experiments (DoE) schemes we analyze both the deterministic and the stochastic errors. We also derive optimal step sizes for each scheme, such that the total error is minimized. Some of the schemes have the nice property that this step size also minimizes the variance of the error. Based on these results we show that, to obtain good gradient estimates for noisy functions, it is worthwhile to use DoE schemes. We recommend implementing such schemes in NLP solvers.
    Keywords: nonlinear programming; finite elements; gradient estimation
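
    A small numerical illustration of the step-size trade-off discussed above (not taken from the paper): for a noisy one-dimensional function, a very small step lets the stochastic error dominate, while a large step lets the deterministic (truncation) error dominate. The noise level, test point and step sizes are illustrative assumptions.

        # Sketch: mean squared error of forward and central differences on a noisy function.
        import numpy as np

        sigma = 1e-4
        f_noisy = lambda x: np.sin(x) + np.random.normal(scale=sigma)

        def forward_diff(f, x, h): return (f(x + h) - f(x)) / h
        def central_diff(f, x, h): return (f(x + h) - f(x - h)) / (2 * h)

        x0, true_grad = 1.0, np.cos(1.0)
        for h in [1e-6, 1e-4, 1e-2, 1e-1]:
            mse_fwd = np.mean([(forward_diff(f_noisy, x0, h) - true_grad)**2 for _ in range(1000)])
            mse_cen = np.mean([(central_diff(f_noisy, x0, h) - true_grad)**2 for _ in range(1000)])
            print(f"h={h:8.0e}  MSE forward={mse_fwd:.3e}  central={mse_cen:.3e}")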

    Why Methods for Optimization Problems with Time-Consuming Function Evaluations and Integer Variables Should Use Global Approximation Models

    This paper advocates the use of methods based on global approximation models for optimization problems with time-consuming function evaluations and integer variables. We show that methods based on local approximations may lead to the integer rounding of the optimal solution of the continuous problem, and even to worse solutions. We then discuss a method based on global approximations. Test results show that such a method performs well, both for theoretical and practical examples, without suffering the disadvantages of methods based on local approximations.
    Keywords: approximation models; black-box optimization; integer optimization
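
    A toy illustration of the two workflows (not one of the paper's test cases): rounding the optimum of the continuous problem versus fitting a global approximation model on a few evaluated points and ranking all integer candidates with it. The test function, design points and cubic polynomial surrogate are illustrative assumptions; the paper's actual global models may differ.

        # Sketch: rounded continuous optimum vs. optimum suggested by a global surrogate.
        import numpy as np

        # stand-in for a costly simulation: smooth trend plus a narrow dip near x = 2.5
        expensive_f = lambda x: (x - 6.0)**2 / 10.0 - 2.0 * np.exp(-8.0 * (x - 2.5)**2)
        candidates = np.arange(0, 9)                   # integer design space 0..8

        # (a) local shortcut: solve the continuous problem (here by a fine grid) and round
        xs = np.linspace(0.0, 8.0, 801)
        x_round = int(round(xs[np.argmin(expensive_f(xs))]))

        # (b) global approximation: evaluate a small design, fit a global model,
        #     and rank every integer candidate with the cheap surrogate
        design = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
        coeffs = np.polyfit(design, expensive_f(design), deg=3)
        x_global = int(candidates[np.argmin(np.polyval(coeffs, candidates))])

        print(f"rounded continuous optimum: x = {x_round}, f = {expensive_f(x_round):.3f}")
        print(f"global-model suggestion:    x = {x_global}, f = {expensive_f(x_global):.3f}")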

    "Wil je snel gaan, ga alleen, wil je ver komen ga dan samen" : onderwijsdag ondernemerschap 21 januari 2010

    Overview of the various forms of collaboration and collaborative projects covered during the Onderwijsdag Ondernemerschap (entrepreneurship education day).
